We discuss topological aspects of cluster analysis and show that inferring the topological structure of a dataset before clustering can considerably enhance cluster detection: theoretical arguments and empirical evidence show that clustering the embedding vectors, which represent the structure of the data manifold, rather than the observed feature vectors themselves, is highly beneficial. To demonstrate this, we combine the manifold learning method UMAP with the density-based clustering method DBSCAN. Results on synthetic and real data show that this both simplifies and improves clustering for a variety of low-dimensional problems, including clusters of varying density and/or entangled shapes. Our approach simplifies clustering because the topological pre-processing consistently reduces the parameter sensitivity of DBSCAN. Clustering the resulting embeddings with DBSCAN can then even outperform sophisticated methods such as SpectACl and ClusterGAN. Finally, our investigation suggests that the crucial issue in clustering is not the nominal dimension of the data or how many irrelevant features it contains, but rather how \textit{separable} the clusters are in the ambient observation space into which they are embedded, which is usually the (high-dimensional) Euclidean space defined by the data features. Our approach succeeds because we perform the cluster analysis after projecting the data into a space that is, in this sense, more suitable.
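To make the described pipeline concrete, here is a minimal sketch, not taken from the paper, of clustering a UMAP embedding with DBSCAN using the umap-learn and scikit-learn libraries; the toy dataset and all parameter values (n_neighbors, min_dist, eps, min_samples) are illustrative assumptions:

```python
# Minimal sketch: density-based clustering on a UMAP embedding (illustrative parameters).
import numpy as np
import umap  # from the umap-learn package
from sklearn.cluster import DBSCAN
from sklearn.datasets import make_moons

# Toy data: two entangled shapes plus irrelevant noise features.
X, _ = make_moons(n_samples=1000, noise=0.05, random_state=0)
X = np.hstack([X, np.random.RandomState(0).normal(size=(1000, 8))])

# Step 1: topological pre-processing -- embed the data manifold with UMAP.
embedding = umap.UMAP(n_neighbors=30, min_dist=0.0, n_components=2,
                      random_state=0).fit_transform(X)

# Step 2: run DBSCAN on the embedding instead of on the raw feature vectors.
labels = DBSCAN(eps=0.5, min_samples=10).fit_predict(embedding)
print("clusters found:", len(set(labels) - {-1}))  # -1 marks noise points
```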
Safety-critical systems are usually subjected to hazard analysis before commissioning in order to identify and analyze potentially hazardous system states that may arise during operation. Currently, hazard analysis is mainly based on human reasoning, past experience, and simple tools such as checklists and spreadsheets. Increasing system complexity makes such approaches less and less suitable. Moreover, test-based hazard analysis is often unsuitable due to high costs or the danger of physical harm. A remedy for this are model-based hazard analysis methods, which rely either on formal models or on simulation models, each with its own benefits and drawbacks. This paper proposes a two-layer approach that combines the benefits of an exhaustive analysis using formal methods with a detailed analysis using simulation. Unsafe behaviors that lead to unsafe states are first synthesized from a formal model of the system using supervisory control theory. The result is then used as input to the simulation, where a detailed analysis is performed with domain-specific risk metrics. Although the proposed approach is generally applicable, the paper demonstrates its benefits on an industrial human-robot collaboration system.
We propose an online and data-driven uncertainty quantification method to enable the development of safe human-robot collaboration applications. Safety and risk assessment of a system are closely related to the accuracy of measurements: unique parameters often cannot be accessed directly through known models and therefore have to be measured. However, measurements typically suffer from uncertainties due to the limited performance of sensors and even unknown environmental disturbances or humans. In this work, we quantify these measurement uncertainties by exploiting conservation properties with quantitative, system-specific characteristics that are constant over time, space, or other state-space dimensions. The key idea of our approach lies in the direct evaluation of the incoming data at runtime with reference to the conservation equations. In particular, we estimate violations of known domain-specific conservation properties and interpret them as the consequence of measurement uncertainties. We validate our approach on a use case in the context of human-robot collaboration, thereby highlighting the importance of our contribution to the successful development of safe robot systems under real-world conditions, e.g., in industrial environments. In addition, we show how the obtained uncertainty values can be directly mapped onto arbitrary safety limits (e.g., from ISO 13849), which allows compliance with safety standards to be monitored at runtime.
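As a loose illustration of this idea, and only a sketch under assumptions of our own (the specific conservation property, the signal names, and the mapping from residuals to an uncertainty value are hypothetical and not taken from the paper), one could monitor the residual of a known conservation equation over incoming measurements and treat it as an uncertainty estimate:

```python
# Sketch: estimate measurement uncertainty from the violation of a known
# conservation property (here, hypothetically, conservation of a flow quantity).
from collections import deque
import statistics

class ConservationMonitor:
    def __init__(self, window: int = 100):
        self.residuals = deque(maxlen=window)

    def update(self, inflow: float, outflow: float) -> float:
        # A perfect measurement would satisfy inflow - outflow == 0.
        # Any residual is attributed to measurement uncertainty.
        residual = abs(inflow - outflow)
        self.residuals.append(residual)
        # Running spread of the residuals serves as the uncertainty value.
        return statistics.pstdev(self.residuals) if len(self.residuals) > 1 else residual

monitor = ConservationMonitor()
uncertainty = monitor.update(inflow=1.02, outflow=0.97)
# The obtained value could then be checked against a configured safety limit at runtime.
assert uncertainty < 0.1, "measurement uncertainty exceeds configured safety limit"
```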
Folding garments reliably and efficiently is a long-standing challenge in robotic manipulation due to the complex dynamics and high-dimensional configuration space of garments. An intuitive approach is to initially manipulate the garment into a canonical smooth configuration before folding. In this work, we develop SpeedFolding, a reliable and efficient bimanual system that, given user-defined instructions as folding lines, manipulates an initially crumpled garment into (1) a smoothed and (2) a folded configuration. Our primary contribution is a novel neural network architecture that is able to predict pairs of gripper poses to parameterize a diverse set of bimanual action primitives. After learning from 4300 human-annotated and self-supervised actions, the robot is able to fold garments from a random initial configuration in under 120 s on average with a success rate of 93%. Real-world experiments show that the system is able to generalize to garments of different colors, shapes, and stiffness. While prior work achieved 3-6 folds per hour (FPH), SpeedFolding achieves 30-40 FPH.
This paper presents an approach for verifying the behavior of nonlinear artificial neural networks (ANNs) found in cyber-physical safety-critical systems. We implement a dedicated interval constraint propagator for the sigmoid function in the SMT solver iSAT and compare this approach both with a compositional approach, which encodes the sigmoid function via the basic arithmetic features available in iSAT, and with an approximation approach. Our experimental results show that the dedicated and the compositional approach clearly outperform the approximation approach. Throughout all our benchmarks, the dedicated approach showed equal or better performance than the compositional approach.
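To illustrate what such a dedicated interval propagator for the sigmoid can compute, here is a simplified sketch in plain Python rather than inside an SMT solver; it relies only on the monotonicity of the sigmoid and, unlike a sound solver integration, ignores outward rounding and iSAT-specific details:

```python
# Sketch: forward interval propagation through the sigmoid function.
# Because sigmoid is strictly increasing, the image of [lo, hi] is [sigmoid(lo), sigmoid(hi)].
import math

def sigmoid(x: float) -> float:
    return 1.0 / (1.0 + math.exp(-x))

def sigmoid_interval(lo: float, hi: float) -> tuple[float, float]:
    assert lo <= hi
    return sigmoid(lo), sigmoid(hi)

# Example: bound the output of a single sigmoid neuron given bounds on its pre-activation.
print(sigmoid_interval(-1.0, 2.0))  # approx (0.2689, 0.8808)
```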
It is a basic fact in the scientific literature that the Lipschitz norm of the realization function of a feedforward fully-connected rectified linear unit (ReLU) artificial neural network (ANN) can be bounded from above, up to a multiplicative constant, by sums of powers of the norm of the ANN parameter vector. Roughly speaking, in this work we reveal that in the case of shallow ANNs the converse inequality is also true. More formally, we prove that the norm of the equivalence class of ANN parameter vectors with the same realization function is bounded from above, up to a multiplicative constant, by the sum of powers of the Lipschitz norm of the ANN realization function (with the exponents $1/2$ and $1$). Moreover, we prove that this upper bound only holds when employing the Lipschitz norm, but holds neither for Hölder norms nor for Sobolev-Slobodeckij norms. In addition, we prove that this upper bound only holds for the sum of powers of the Lipschitz norm with the exponents $1/2$ and $1$, but is not satisfied by the Lipschitz norm alone.
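Read roughly and in our own notation (the symbols $\theta$, $\vartheta$, $\mathcal{R}_\theta$, $\|\cdot\|_{\mathrm{Lip}}$, and the constant $c$ are ours, not the paper's), the claimed converse inequality for a shallow ANN parameter vector $\theta$ with realization function $\mathcal{R}_\theta$ takes the form
\[
  \inf_{\vartheta \colon \mathcal{R}_\vartheta = \mathcal{R}_\theta} \lVert \vartheta \rVert
  \;\le\;
  c \bigl( \lVert \mathcal{R}_\theta \rVert_{\mathrm{Lip}}^{1/2} + \lVert \mathcal{R}_\theta \rVert_{\mathrm{Lip}} \bigr),
\]
i.e., the norm of the equivalence class of parameter vectors sharing the realization function is controlled by the sum of the powers $1/2$ and $1$ of the Lipschitz norm.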
We present a novel simulation-based approach for identifying hazards that result from unexpected worker behavior in human-robot collaboration. Simulation-based safety testing must take into account the fact that human behavior is variable and that human error can occur. However, when only the expected worker behavior is simulated, critical hazards can remain undiscovered. On the other hand, simulating all possible worker behaviors is computationally infeasible. This raises the question of how interesting (i.e., potentially hazardous) worker behaviors can be found with a feasible number of simulation runs. We frame this as a search problem in the space of possible worker behaviors. Because this search space can become very complex, we introduce the following measures: (1) a restriction of the search space based on workflow constraints, (2) a prioritization of behaviors based on how much they deviate from the nominal behavior, and (3) the use of risk metrics to guide the search towards high-risk behaviors, which are more likely to expose hazards. We demonstrate the approach on a collaborative workflow scenario that involves a human worker, a robot arm, and a mobile robot.
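As an illustration of measures (2) and (3), the following is only a sketch under assumptions of our own; the behavior names, the deviation measure, the risk estimates, and the weighting are hypothetical placeholders and not the paper's definitions:

```python
# Sketch: prioritize candidate worker behaviors for simulation by combining
# their deviation from the nominal behavior with an estimated risk score.
from dataclasses import dataclass

@dataclass
class Behavior:
    name: str
    deviation: float      # how far the behavior deviates from the nominal workflow
    risk_estimate: float  # domain-specific risk metric estimated from a coarse model

def priority(b: Behavior, w_dev: float = 0.4, w_risk: float = 0.6) -> float:
    # Higher priority for behaviors that are both unusual and risky.
    return w_dev * b.deviation + w_risk * b.risk_estimate

candidates = [
    Behavior("skip_safety_check", deviation=0.9, risk_estimate=0.8),
    Behavior("slower_walking_pace", deviation=0.2, risk_estimate=0.1),
    Behavior("reach_into_robot_workspace", deviation=0.6, risk_estimate=0.95),
]

# Simulate the most promising (i.e., potentially hazardous) behaviors first.
for b in sorted(candidates, key=priority, reverse=True):
    print(f"simulate: {b.name} (priority {priority(b):.2f})")
```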
In this paper, we propose a novel framework dubbed peer learning to deal with the problem of biased scene graph generation (SGG). This framework uses predicate sampling and consensus voting (PSCV) to encourage different peers to learn from each other, improving model diversity and mitigating bias in SGG. To address the heavily long-tailed distribution of predicate classes, we propose to use predicate sampling to divide and conquer this issue. As a result, the model is less biased and makes more balanced predicate predictions. Specifically, one peer may not be sufficiently diverse to discriminate between different levels of predicate distributions. Therefore, we sample the data distribution based on frequency of predicates into sub-distributions, selecting head, body, and tail classes to combine and feed to different peers as complementary predicate knowledge during the training process. The complementary predicate knowledge of these peers is then ensembled utilizing a consensus voting strategy, which simulates a civilized voting process in our society that emphasizes the majority opinion and diminishes the minority opinion. This approach ensures that the learned representations of each peer are optimally adapted to the various data distributions. Extensive experiments on the Visual Genome dataset demonstrate that PSCV outperforms previous methods. We have established a new state-of-the-art (SOTA) on the SGCls task by achieving a mean of \textbf{31.6}.
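As a rough sketch of the ensembling step only (our own simplification, not the authors' implementation; the per-peer probability vectors and the majority-emphasizing weighting scheme are illustrative assumptions), consensus voting over peer predicate predictions might look as follows:

```python
# Sketch: consensus voting over predicate predictions from multiple peers.
import numpy as np

def consensus_vote(peer_probs: list[np.ndarray]) -> int:
    """peer_probs: one probability vector over predicate classes per peer."""
    votes = [int(np.argmax(p)) for p in peer_probs]
    # Emphasize the majority opinion: weight each peer by how many peers agree with it.
    counts = {v: votes.count(v) for v in set(votes)}
    weights = np.array([counts[v] for v in votes], dtype=float)
    weights /= weights.sum()
    combined = sum(w * p for w, p in zip(weights, peer_probs))
    return int(np.argmax(combined))

# Three peers trained on head/body/tail predicate sub-distributions (illustrative values).
peers = [np.array([0.7, 0.2, 0.1]), np.array([0.6, 0.3, 0.1]), np.array([0.1, 0.2, 0.7])]
print(consensus_vote(peers))  # -> 0, the majority-backed predicate class
```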
Ensemble learning combines results from multiple machine learning models in order to provide a better and optimised predictive model with reduced bias, variance and improved predictions. However, in federated learning it is not feasible to apply centralised ensemble learning directly due to privacy concerns. Hence, a mechanism is required to combine results of local models to produce a global model. Most distributed consensus algorithms, such as Byzantine fault tolerance (BFT), do not normally perform well in such applications. This is because, in such methods predictions of some of the peers are disregarded, so a majority of peers can win without even considering other peers' decisions. Additionally, the confidence score of the result of each peer is not normally taken into account, although it is an important feature to consider for ensemble learning. Moreover, the problem of a tie event is often left un-addressed by methods such as BFT. To fill these research gaps, we propose PoSw (Proof of Swarm), a novel distributed consensus algorithm for ensemble learning in a federated setting, which was inspired by particle swarm based algorithms for solving optimisation problems. The proposed algorithm is theoretically proved to always converge in a relatively small number of steps and has mechanisms to resolve tie events while trying to achieve sub-optimum solutions. We experimentally validated the performance of the proposed algorithm using ECG classification as an example application in healthcare, showing that the ensemble learning model outperformed all local models and even the FL-based global model. To the best of our knowledge, the proposed algorithm is the first attempt to make consensus over the output results of distributed models trained using federated learning.
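The following sketch is explicitly not the PoSw algorithm itself, whose details are not given in the abstract; it merely illustrates, under assumptions of our own, the two gaps the abstract highlights: taking peer confidence scores into account and resolving tie events deterministically:

```python
# Sketch: confidence-weighted voting with a deterministic tie-break.
# This is NOT PoSw; it only illustrates using confidence scores and resolving ties.
from collections import defaultdict

def weighted_vote(predictions: list[tuple[str, float]]) -> str:
    """predictions: one (class_label, confidence) pair per peer."""
    scores = defaultdict(float)
    for label, confidence in predictions:
        scores[label] += confidence
    best = max(scores.values())
    tied = sorted(label for label, score in scores.items() if score == best)
    # Tie event: all peers deterministically converge on the same label.
    return tied[0]

# Hypothetical ECG classification outputs from four peers (confidences chosen so the
# two classes tie exactly and the tie-break is exercised).
peers = [("AFib", 0.75), ("Normal", 0.5), ("AFib", 0.5), ("Normal", 0.75)]
print(weighted_vote(peers))  # -> "AFib"
```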
Problem statement: Standardisation of AI fairness rules and benchmarks is challenging because AI fairness and other ethical requirements depend on multiple factors such as context, use case, type of the AI system, and so on. In this paper, we elaborate that the AI system is prone to biases at every stage of its lifecycle, from inception to its usage, and that all stages require due attention for mitigating AI bias. We need a standardised approach to handle AI fairness at every stage. Gap analysis: While AI fairness is a hot research topic, a holistic strategy for AI fairness is generally missing. Most researchers focus only on a few facets of AI model-building. Peer review shows excessive focus on biases in the datasets, fairness metrics, and algorithmic bias. In the process, other aspects affecting AI fairness get ignored. The solution proposed: We propose a comprehensive approach in the form of a novel seven-layer model, inspired by the Open System Interconnection (OSI) model, to standardise AI fairness handling. Despite the differences in the various aspects, most AI systems have similar model-building stages. The proposed model splits the AI system lifecycle into seven abstraction layers, each corresponding to a well-defined AI model-building or usage stage. We also provide checklists for each layer and deliberate on potential sources of bias in each layer and their mitigation methodologies. This work will facilitate layer-wise standardisation of AI fairness rules and benchmarking parameters.